

AI as Agency Without Intelligence: On ChatGPT, Large Language Models, and Other Generative Models by Luciano Floridi :: SSRN

#artificialintelligence

The article argues that these LLMs can process texts with extraordinary success, often in a way that is indistinguishable from human output, while lacking any intelligence, understanding, or cognitive ability. It also highlights the limitations of these LLMs, such as their brittleness (susceptibility to catastrophic failure), unreliability (false or made-up information), and occasional inability to make elementary logical inferences or deal with simple mathematics. The article concludes that LLMs represent a decoupling of agency and intelligence. While extremely powerful and potentially very useful, they should not be relied upon for complex reasoning or crucial information; they could instead be used to gain a deeper understanding of a text's content and context, rather than as a replacement for human input. The best author is neither an LLM nor a human being, but a human being using an LLM proficiently and insightfully.


Descriptive AI Ethics: Collecting and Understanding the Public Opinion

Lima, Gabriel, Cha, Meeyoung

arXiv.org Artificial Intelligence

As we start to encounter AI systems in various morally and legally salient environments, some have begun to explore how current responsibility ascription practices might be adapted to meet such new technologies [19, 33]. A critical viewpoint today is that autonomous and self-learning AI systems pose a so-called responsibility gap [27]. These systems' autonomy challenges human control over them [13], while their adaptability leads to unpredictability. Hence, it might be infeasible to trace responsibility back to a specific entity if these systems cause any harm. Considering responsibility practices as the adoption of certain attitudes towards an agent [40], scholarly work has also posed the question of whether AI systems are appropriate subjects of such practices [15, 29, 37] -- e.g., they might "have a body to kick," yet they "have no soul to damn" [4].


The Chinese Approach to Artificial Intelligence: An Analysis of Policy and Regulation by Huw Roberts, Josh Cowls, Jessica Morley, Mariarosaria Taddeo, Vincent Wang, Luciano Floridi :: SSRN

#artificialintelligence

In July 2017, China's State Council released the country's strategy for developing artificial intelligence (AI), entitled 'New Generation Artificial Intelligence Development Plan' (新一代人工智能发展规划). This strategy outlined China's aims to become the world leader in AI by 2030, to monetise AI into a trillion-yuan ($150 billion) industry, and to emerge as the driving force in defining ethical norms and standards for AI. Several reports have analysed specific aspects of China's AI policies or have assessed the country's technical capabilities. Instead, in this article, we focus on the socio-political background and policy debates that are shaping China's AI strategy. In particular, we analyse the main strategic areas in which China is investing in AI and the concurrent ethical debates that are delimiting its use.


Technology needs ethics. Oxford philosopher Luciano Floridi explains why

#artificialintelligence

Luciano Floridi has never been kind to technology. In '95, when the web as we know it today did not exist and he was a PhD student in Philosophy, he wrote things such as: «No one controls the system globally, and the very structure of the internet ensures that no one will ever be able to control it in the future». Or: «the Internet promotes the growth of knowledge while creating forms of unprecedented ignorance». He directs the Digital Ethics Lab at the University of Oxford and is the president of the Data Ethics Group of the Alan Turing Institute. He also serves as an advisor to big tech, governments, and the European Union.


From What to How. An Overview of AI Ethics Tools, Methods and Research to Translate Principles into Practices

Morley, Jessica, Floridi, Luciano, Kinsey, Libby, Elhalal, Anat

arXiv.org Artificial Intelligence

However, in recent years symbolic AI has been complemented and sometimes replaced by (Deep) Neural Networks and Machine Learning (ML) techniques. This has vastly increased AI's potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles -- the 'what' of AI ethics (beneficence, non-maleficence, autonomy, justice and explicability) -- rather than on practices, the 'how.' Awareness of the potential issues is increasing at a fast rate, but the AI community's ability to take action to mitigate the associated risks is still in its infancy. Therefore, our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically minded developers 'apply ethics' at each stage of the pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be easily applicable to other branches of AI. The article outlines the research method for creating this typology, the initial findings, and provides a summary of future research needs.


Episodes - The Machine Ethics Podcast

#artificialintelligence

This month I'm talking with Miranda Mowbray about: cyber security and machine learning, an ethical code of conduct for big data, sitting down as a team to discuss ethical issues in data projects, respecting the people whose data you might be using, not collecting data you don't need and deleting things, and much more. This is a very special episode of interviews with various participants of this year's A.I. retreat at Juvet, Norway. We also dive into a general framework for machine ethics, contractarianism, Rawls' original position thought experiment (which is one of my favourite ethical thought experiments), the maximin approach to machine ethics, and whether robots should respect the consent of a person in life-threatening circumstances... This month is (sort of) part 2 of our two-part look at AI in Culture. Chris and I take an extended look at how science fiction portrays technology, from the realistic to the law-of-nature-breaking mythos.


Voices in AI – Episode 65: A Conversation with Luciano Floridi

#artificialintelligence

Today's leading minds talk AI with host Byron Reese. In Episode 65 of Voices in AI, host Byron Reese and Luciano Floridi discuss ethics, information, AI, and government monitoring. They also dig into Luciano's new book "The Fourth Revolution" and ponder how technology will disrupt the job market in the days to come. Luciano Floridi holds multiple degrees, including a PhD in philosophy and logic from the University of Warwick. He is currently a professor of philosophy and ethics of information, as well as the director of the Digital Ethics Lab, at the University of Oxford. Along with his responsibilities as a professor, Luciano is also the chair of the Data Ethics Group at the Alan Turing Institute.


Why We Need to be Mindful of Who Programs AI

#artificialintelligence

One thing Artificial Intelligence can't be is prejudiced. It should be impossible; machines don't suddenly decide to hate, they're all about the facts. But what if the people programming them are prejudiced themselves? A disturbing new report in Science reveals that some are inadvertently doing just that. Who remembers Microsoft's Tay, a 2016 chatbot designed to ape the verbal machinations of a 19-year-old American girl? The high-minded idea behind it was, according to Microsoft, to "conduct research on conversational understanding."


Charting Our AI Future by Luciano Floridi

#artificialintelligence

Intelligence is the reason for everything. We humans have thus far been the carriers and servers of intelligence, ever since we first became intelligent, as opposed to other species. We are now reaching our final stage as servers and carriers of intelligence. AI is gradually taking over.


Automation will create new needs, new jobs, says Luciano Floridi

#artificialintelligence

What will the world of technology look like 30 years from now? Megatech: Technology In 2050 tries to tackle this question. Edited by The Economist's executive editor Daniel Franklin, the book is a collection of essays by eminent personalities like Frank Wilczek, Alastair Reynolds, Nancy Kress and Melinda Gates--each one of whom tells their version of the future. An essay by Luciano Floridi, professor of philosophy and ethics of information at the University of Oxford in the UK, talks about Artificial Intelligence (AI). In "The Ethics Of Artificial Intelligence", he says the threat of monstrous machines dominating humanity is imaginary, but the risk of humanity misusing its machines is real. In an email interview, Prof. Floridi talks about how real, or not, the threat of AI is.